
    Emergence of Gabor-like Receptive Fields in a Recurrent Network of Mixed-Signal Silicon Neurons

    Mixed-signal analog/digital neuromorphic circuits offer an ideal computational substrate for testing and validating hypotheses about models of sensory processing, as they are affected by low resolution, variability, and other limitations that similarly affect real neural circuits. In addition, their real-time response properties make it possible to test these models in closed-loop sensory-processing hardware setups and to obtain immediate feedback on the effect of different parameter settings. Within this context, we developed a recurrent neural network architecture, based on a model of the retinocortical visual pathway, that yields neurons with Gabor-like receptive fields, highly tuned to visual stimuli of a specific orientation and spatial frequency. The computation performed by the retina is emulated by a Dynamic Vision Sensor (DVS), while the subsequent feed-forward and recurrent processing stages are implemented by a Dynamic Neuromorphic Asynchronous Processor (DYNAP) chip that comprises adaptive integrate-and-fire neurons and dynamic synapses. We show how the network implemented on this device gives rise to neurons tuned to specific orientations and spatial frequencies, independent of the temporal frequency of the visual stimulus. Compared to alternative feed-forward schemes, the proposed model produces highly structured receptive fields with a limited number of synaptic connections, thus optimizing hardware resources. We validate the proposed model and approach with experimental results on both synthetic and natural images.
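As a rough illustration of the kind of receptive field the network converges to, a Gabor function (a sinusoidal carrier under a Gaussian envelope) can be sampled as follows. This is only a minimal numerical sketch of the target profile, not the paper's hardware implementation; all parameter names are illustrative:

```python
import numpy as np

def gabor(size, wavelength, theta, sigma, phase=0.0):
    """Sample a 2-D Gabor function on a (size x size) grid.

    wavelength sets the spatial frequency of the carrier (pixels/cycle),
    theta its orientation (radians), sigma the Gaussian envelope width.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    # Rotate coordinates so the carrier oscillates along orientation theta
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * xr / wavelength + phase)
    return envelope * carrier

# A 21x21 receptive field tuned to a 45-degree orientation
rf = gabor(size=21, wavelength=8.0, theta=np.pi / 4, sigma=4.0)
```

Varying `theta` and `wavelength` sweeps the orientation and spatial-frequency tuning that the abstract describes.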

    Virtual Reality to Simulate Visual Tasks for Robotic Systems

    Virtual reality (VR) can be used as a tool to analyze the interactions between the visual system of a robotic agent and the environment, with the aim of designing algorithms to solve the visual tasks necessary to behave properly in the 3D world. The novelty of our approach lies in the use of VR as a tool to simulate the behavior of vision systems. The visual system of a robot (e.g., an autonomous vehicle, an active vision system, or a driving assistance system) and its interplay with the environment can be modeled through the geometrical relationships between the virtual stereo cameras and the virtual 3D world. Unlike conventional applications, where VR is used for the perceptual rendering of visual information to a human observer, in the proposed approach a virtual world is rendered to simulate the actual projections on the cameras of a robotic system. In this way, machine vision algorithms can be quantitatively validated using the ground-truth data provided by knowledge of both the structure of the environment and the vision system.
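The geometrical relationship between a virtual 3D point and its projections on two parallel virtual cameras reduces, in the simplest pinhole case, to a pair of perspective projections whose horizontal offset (the disparity) encodes depth. A minimal sketch, assuming a parallel-axis stereo rig (not the specific camera model of the paper):

```python
def project_stereo(point, focal, baseline):
    """Project a 3-D point (camera frame, Z pointing forward, metres)
    onto two parallel pinhole cameras separated by `baseline` along X.

    Returns (left pixel, right pixel, disparity); disparity equals
    focal * baseline / Z, which is the ground truth a VR simulator
    can supply for validating stereo algorithms.
    """
    X, Y, Z = point
    # Optical centres sit at -baseline/2 (right) and +baseline/2 (left)
    xl = focal * (X + baseline / 2.0) / Z
    xr = focal * (X - baseline / 2.0) / Z
    y = focal * Y / Z
    return (xl, y), (xr, y), xl - xr

left, right, disparity = project_stereo((0.1, 0.05, 2.0),
                                        focal=500.0, baseline=0.065)
```

Because the simulator knows `point` exactly, the returned disparity serves as ground truth against which an estimated disparity map can be scored.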

    Modelling Short-Latency Disparity-Vergence Eye Movements Under Dichoptic Unbalanced Stimulation

    Vergence eye movements align the optical axes of our two eyes onto an object of interest, thus facilitating the binocular summation of the images projected onto the left and right retinae into a single percept. Both the computational substrate and the functional behaviour of binocular vergence eye movements have been the topic of in-depth investigation. Here, we attempt to bring together what is known about the computation and function of the vergence mechanism. To this aim, we evaluated a biologically inspired model of horizontal and vertical vergence control, based on a network of V1 simple and complex cells. The model's performance was compared to that of human observers, using dichoptic stimuli characterized by varying amounts of interocular correlation, interocular contrast, and vertical disparity. The model provides a qualitative explanation of the psychophysiological data. Nevertheless, the human vergence response to interocular contrast differs from the model's behavior, suggesting that the proposed disparity-vergence model could be improved to account for human behavior. Moreover, this observation highlights how dichoptic unbalanced stimulation can be used to investigate the significant but neglected role of sensory processing in the motor planning of eye movements in depth.
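At its core, disparity-vergence control is a closed loop: the residual retinal disparity drives an eye rotation that reduces it. The following is a deliberately minimal proportional-control sketch of that loop; the paper's actual model replaces the raw disparity signal with the output of a population of V1 simple and complex cells, and the gain value here is arbitrary:

```python
def vergence_step(disparity, gain=0.3):
    """One proportional vergence command: rotate the eyes by a fraction
    of the residual disparity, with the sign chosen to null it."""
    return -gain * disparity

# Closed loop: each command removes a fraction of the remaining disparity
disparity = 4.0  # degrees of initial horizontal disparity (illustrative)
for _ in range(10):
    disparity += vergence_step(disparity)
```

With `gain=0.3` the residual disparity decays geometrically by a factor of 0.7 per step, a crude stand-in for the short-latency vergence responses the abstract studies.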

    Phase-Based Binocular Perception of Motion in Depth: Cortical-Like Operators and Analog VLSI Architectures

    We present a cortical-like strategy to obtain reliable estimates of the motion of objects in a scene toward or away from the observer (motion in depth), from local measurements of binocular parameters derived from a direct comparison of the results of monocular spatiotemporal filtering operations performed on stereo image pairs. This approach is suitable for a hardware implementation, in which such parameters can be gained via a feed-forward computation (i.e., collection, comparison, and punctual operations) on the outputs of the nodes of recurrent VLSI lattice networks performing local computations. These networks act as efficient computational structures for embedded analog filtering operations in smart vision sensors. Extensive simulations on both synthetic and real-world image sequences prove the validity of the approach, which makes it possible to gain high-level information about the 3D structure of the scene directly from sensory data, without resorting to explicit scene reconstruction.
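One standard way to relate monocular measurements to motion in depth is the interocular velocity difference: for parallel cameras, disparity d = focal * baseline / Z, so the rate of disparity change (the difference between left- and right-eye image velocities) determines dZ/dt. A minimal sketch of that relation, assuming a parallel-axis geometry rather than the paper's phase-based VLSI operators:

```python
def motion_in_depth(v_left, v_right, depth, focal, baseline):
    """Estimate dZ/dt from the interocular velocity difference (IOVD).

    Differentiating d = focal * baseline / Z gives
        dd/dt = v_left - v_right = -(focal * baseline / Z**2) * dZ/dt,
    which is solved here for dZ/dt.
    """
    dd_dt = v_left - v_right
    return -dd_dt * depth**2 / (focal * baseline)

# Opposite horizontal image velocities in the two eyes (pixels/s, illustrative)
vz = motion_in_depth(v_left=-1.0, v_right=1.0, depth=2.0,
                     focal=500.0, baseline=0.065)
```

The estimate is local (one velocity pair per image location), which is what makes a feed-forward, per-node hardware mapping plausible.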

    A hierarchical system for a distributed representation of the peripersonal space of a humanoid robot

    Reaching a target object in an unknown and unstructured environment is easily performed by human beings. However, designing a humanoid robot that executes the same task requires the implementation of complex abilities, such as identifying the target in the visual field, estimating its spatial location, and precisely driving the motors of the arm to reach it. While research usually tackles the development of such abilities individually, in this work we integrate a number of computational models into a unified framework and demonstrate, in a humanoid torso, the feasibility of an integrated working representation of its peripersonal space. To achieve this goal, we propose a cognitive architecture that connects several models inspired by the neural circuits of the visual, frontal, and posterior parietal cortices of the brain. The outcome of the integration process is a system that allows the robot to create its internal model and its representation of the surrounding space by interacting with the environment directly, through a mutual adaptation of perception and action. The robot is eventually capable of executing a set of tasks, such as recognizing, gazing at, and reaching target objects, which can be performed separately or combined to support more structured and effective behaviors.
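A recurring ingredient in such architectures is the change of reference frame: a target located in gaze-centred coordinates must be remapped into body-centred coordinates before the arm can reach it. A minimal rigid-transform sketch of that remapping (a generic illustration; the paper's models learn these mappings through interaction rather than using fixed matrices):

```python
import numpy as np

def yaw_matrix(angle):
    """Rotation about the vertical (Z) axis by `angle` radians."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def gaze_to_body(target_gaze, head_rotation, head_offset):
    """Remap a target from gaze/head-centred to body-centred coordinates
    via a rigid transform: rotate by the head pose, then translate by
    the head's position in the body frame."""
    return head_rotation @ np.asarray(target_gaze) + np.asarray(head_offset)

# Target 0.3 m ahead of the eyes, head turned 90 degrees, neck 0.4 m up
target_body = gaze_to_body((0.3, 0.0, 0.1),
                           yaw_matrix(np.pi / 2),
                           (0.0, 0.0, 0.4))
```

Chaining such transforms (eye-to-head, head-to-body, body-to-arm) is what lets the gazing and reaching behaviours in the abstract operate on one shared spatial representation.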